AI Reliance
Barriers to AI Adoption: Image Concerns at Work
Concerns about how workers are perceived can deter effective collaboration with artificial intelligence (AI). In a field experiment on a large online labor market, I hired 450 U.S.-based remote workers to complete an image-categorization job assisted by AI recommendations. Workers were incentivized by the prospect of a contract extension based on an HR evaluator's feedback. I find that workers adopt AI recommendations at lower rates when their reliance on AI is visible to the evaluator, resulting in a measurable decline in task performance. The effects are present despite a conservative design in which workers know that the evaluator is explicitly instructed to assess expected accuracy on the same AI-assisted task. This reduction in AI reliance persists even when the evaluator is reassured about workers' strong performance history on the platform, underscoring how difficult these concerns are to alleviate. Leveraging the platform's public feedback feature, I introduce a novel incentive-compatible elicitation method showing that workers fear heavy reliance on AI signals a lack of confidence in their own judgment, a trait they view as essential when collaborating with AI.
Large Language Model Use Impact Locus of Control
Fu, Jenny Xiyu, Antone, Brennan, Kadoma, Kowe, Jung, Malte
As AI tools increasingly shape how we write, they may also quietly reshape how we perceive ourselves. This paper explores the psychological impact of co-writing with AI on people's locus of control. Through an empirical study with 462 participants, we found that employment status plays a critical role in shaping users' reliance on AI and their locus of control. Our results demonstrate that employed participants displayed higher reliance on AI and a shift toward internal control, while unemployed users tended to experience a reduction in personal agency. Through quantitative results and qualitative observations, this study opens a broader conversation about AI's role in shaping personal agency and identity.
Raising the Stakes: Performance Pressure Improves AI-Assisted Decision Making
Haduong, Nikita, Smith, Noah A.
The potential is not necessarily realized, however, because of several challenges: debates on the ethical responsibility of decisions [8, 26, 44], the human ability to recognize when AI advice should be taken [43], mental models (biases) regarding AI performance and its ability [12, 27] to perform well on subjective tasks, and effects of how the AI advice is delivered [46]. Many research directions thus aim to resolve these barriers to complementarity in human-AI performance, including examining the effects of having AI systems explain their predictions [4] using explainable AI (XAI) methods, introducing cognitive forcing functions when presenting AI advice [6], adjusting how AI advice is presented and interacted with [40], and adjusting task framing to account for mental models about the types of tasks AI can handle [9]. In AI-assisted decision making, the human makes the final decision and bears full responsibility for its consequences. Performance pressure arising from this responsibility can influence decision-making behavior [2]. Most research working toward complementary human-AI performance isolates human behavior from the effects of performance pressure, because the field is still rapidly developing its understanding of how humans perceive and work with AI tools. These experiments use intrinsically high- and low-stakes tasks, but the stakes carry little tangible consequence for the evaluators themselves. We therefore observe a gap in the literature on how people rely on AI assistants under performance pressure, i.e., when the stakes matter personally. In this work, we seek to understand how performance pressure affects the use of AI advice when that advice is provided as a second opinion. We induce performance pressure through a pay-for-performance scheme framed as a loss.
A Survey of AI Reliance
Eckhardt, Sven, Kühl, Niklas, Dolata, Mateusz, Schwabe, Gerhard
Artificial intelligence (AI) systems have become an indispensable component of modern technology. However, research on human behavioral responses is lagging behind, i.e., the research into human reliance on AI advice (AI reliance). Current shortcomings in the literature include the unclear influences on AI reliance, lack of external validity, conflicting approaches to measuring reliance, and disregard for a change in reliance over time. Promising avenues for future research include reliance on generative AI output and reliance in multi-user situations. In conclusion, we present a morphological box that serves as a guide for research on AI reliance.